The task of shape space learning involves mapping a train set of shapes to and from a latent representation space with good generalization properties. Often, real-world families of shapes have symmetries, which can be defined as transformations that do not change the essence of the shape. A natural way to incorporate symmetries in shape space learning is to ask that the mapping to the shape space (encoder) and the mapping from the shape space (decoder) are equivariant to the relevant symmetries. In this paper, we present a framework for incorporating equivariance in encoders and decoders by introducing two contributions: (i) adapting the recent Frame Averaging (FA) framework for building generic, efficient, and maximally expressive equivariant autoencoders; and (ii) constructing autoencoders equivariant to piecewise Euclidean motions applied to different parts of the shape. To the best of our knowledge, this is the first fully piecewise Euclidean equivariant autoencoder construction. Training our framework is simple: it uses standard reconstruction losses and does not require the introduction of new losses. Our architectures are built of standard (backbone) architectures with the appropriate frame averaging to make them equivariant. Testing our framework on rigid shape datasets using implicit neural representations, and on articulated shape datasets using mesh-based neural networks, shows state-of-the-art generalization, improving relevant baselines by a large margin. In particular, our method demonstrates significant improvement in generalizing to unseen articulated poses.
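The frame-averaging idea can be illustrated with a toy sketch: averaging an arbitrary backbone over a frame of group elements symmetrizes it. Here the frame is the full sign-flip group {I, -I} and the backbone is an arbitrary nonlinear map; both are illustrative stand-ins, not the paper's construction, which targets equivariance to piecewise Euclidean motions.

```python
import numpy as np

def frame_average(phi, X, frame):
    """Frame averaging: symmetrize an arbitrary backbone phi by
    averaging it over the group elements in the frame, making the
    result invariant to those transformations of X."""
    return np.mean([phi(g @ X) for g in frame], axis=0)

# Toy backbone (not invariant on its own) over points stored as
# columns of X, and the sign-flip frame {I, -I}: the averaged map
# is invariant to X -> -X.
phi = lambda X: X.sum(axis=1) + (X ** 3).sum(axis=1)
frame = [np.eye(2), -np.eye(2)]
```

The same averaging pattern extends to equivariance by additionally transforming the output by each group element before averaging.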
In this work we address the challenging problem of multiview 3D surface reconstruction. We introduce a neural network architecture that simultaneously learns the unknown geometry, camera parameters, and a neural renderer that approximates the light reflected from the surface towards the camera. The geometry is represented as a zero level-set of a neural network, while the neural renderer, derived from the rendering equation, is capable of (implicitly) modeling a wide set of lighting conditions and materials. We trained our network on real world 2D images of objects with different material properties, lighting conditions, and noisy camera initializations from the DTU MVS dataset. We found our model to produce state of the art 3D surface reconstructions with high fidelity, resolution and detail.
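Rendering a shape represented as a zero level set of a network is commonly done by sphere tracing along each camera ray; a minimal sketch, with an analytic unit-sphere SDF standing in for the trained network:

```python
import numpy as np

def sphere_trace(sdf, origin, direction, max_steps=64, eps=1e-4):
    """March along a (unit-length) ray by the SDF value until the
    zero level set is reached; returns the hit point or None."""
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = sdf(p)
        if d < eps:
            return p
        t += d  # the SDF value is a safe step size
    return None

# Stand-in for a trained network: unit sphere at the origin.
unit_sphere = lambda p: np.linalg.norm(p) - 1.0
```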
Representing shapes as level sets of neural networks has been recently proved to be useful for different shape analysis and reconstruction tasks. So far, such representations were computed using either: (i) pre-computed implicit shape representations; or (ii) loss functions explicitly defined over the neural level sets. In this paper we offer a new paradigm for computing high fidelity implicit neural representations directly from raw data (i.e., point clouds, with or without normal information). We observe that a rather simple loss function, encouraging the neural network to vanish on the input point cloud and to have a unit norm gradient, possesses an implicit geometric regularization property that favors smooth and natural zero level set surfaces, avoiding bad zero-loss solutions. We provide a theoretical analysis of this property for the linear case, and show that, in practice, our method leads to state of the art implicit neural representations with higher level-of-details and fidelity compared to previous methods.
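The loss described above can be written down directly for the linear case covered by the analysis: a data term that pulls the function to zero on the input points, plus an eikonal term that pushes the gradient norm toward one. A minimal sketch (the weight `lam` is an illustrative choice, not a value from the paper):

```python
import numpy as np

def igr_loss_linear(w, b, points, lam=0.1):
    """Implicit-geometric-regularization loss for a linear model
    f(x) = w.x + b (the linear case analyzed in the paper).
    Term 1 pulls f to zero on the input point cloud; term 2
    (eikonal) pushes the gradient norm ||w|| toward 1."""
    f = points @ w + b
    fit = np.abs(f).mean()
    eikonal = (np.linalg.norm(w) - 1.0) ** 2
    return fit + lam * eikonal
```

For a general network, the analytic gradient `w` is replaced by the network's spatial gradient obtained via automatic differentiation.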
Figure 1: We introduce SAL: Sign Agnostic Learning for learning shapes directly from raw data, such as triangle soups (left in each gray pair; back-faces are in red). Right in each gray pair: the surface reconstruction by SAL of test raw scans; in gold: SAL latent space interpolation between adjacent gray shapes. Raw scans are from the D-Faust dataset [8].
Recent work attributes progress in NLP to large language models (LMs) with increased model size and large quantities of pretraining data. Despite this, current state-of-the-art LMs for Hebrew are both under-parameterized and under-trained compared to LMs in other languages. Additionally, previous work on pretrained Hebrew LMs focused on encoder-only models. While the encoder-only architecture is beneficial for classification tasks, it does not cater well to sub-word prediction tasks, such as Named Entity Recognition, when considering the morphologically rich nature of Hebrew. In this paper we argue that sequence-to-sequence generative architectures are more suitable for LLMs in the case of morphologically rich languages (MRLs) such as Hebrew. We demonstrate that by casting tasks in the Hebrew NLP pipeline as text-to-text tasks, we can leverage powerful multilingual, pretrained sequence-to-sequence models, such as mT5, eliminating the need for a specialized, morpheme-based, separately fine-tuned decoder. Using this approach, our experiments show substantial improvements over previously published results on existing Hebrew NLP benchmarks. These results suggest that multilingual sequence-to-sequence models present a promising building block for NLP for MRLs.
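Casting a pipeline task as text-to-text reduces to formatting source/target string pairs; a hypothetical sketch (the `ner:` prefix and the output format are illustrative assumptions, not the paper's exact scheme):

```python
def cast_ner_as_text_to_text(sentence, entities):
    """Hypothetical text-to-text casting of NER: the model reads a
    prefixed source string and emits entity/label pairs as plain
    text, so a generic seq2seq model can be fine-tuned directly."""
    source = f"ner: {sentence}"
    target = " ; ".join(f"{span} / {label}" for span, label in entities)
    return source, target
```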
Inference from large autoregressive models like Transformers is slow - decoding K tokens takes K serial runs of the model. In this work we introduce speculative decoding - an algorithm to sample from autoregressive models faster without any changes to the outputs, by computing several tokens in parallel. At the heart of our approach lie the observations that (1) hard language-modeling tasks often include easier subtasks that can be approximated well by more efficient models, and (2) using speculative execution and a novel sampling method, we can make exact decoding from the large models faster, by running them in parallel on the outputs of the approximation models, potentially generating several tokens concurrently, and without changing the distribution. Our method supports existing off-the-shelf models without retraining or architecture changes. We demonstrate it on T5-XXL and show a 2X-3X acceleration compared to the standard T5X implementation, with identical outputs.
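The core accept/reject rule of speculative sampling can be sketched in a few lines: accept the draft token with probability min(1, p/q), otherwise resample from the normalized residual max(p - q, 0); this keeps the output exactly distributed according to the target model. A minimal sketch over explicit toy distributions:

```python
import numpy as np

def speculative_accept(p, q, token, u, rng=None):
    """One accept/reject step of speculative sampling.
    p: target-model distribution, q: draft-model distribution,
    token: index proposed by the draft model, u: uniform(0,1) draw.
    Returns (accepted, final_token); on rejection, resamples from
    the normalized residual max(p - q, 0)."""
    if u < min(1.0, p[token] / q[token]):
        return True, int(token)
    residual = np.maximum(p - q, 0.0)
    residual /= residual.sum()
    rng = rng or np.random.default_rng(0)
    return False, int(rng.choice(len(p), p=residual))
```

In the full algorithm this step is applied to each of the K draft tokens in turn, with the large model scoring all K positions in a single parallel pass.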
Denoising diffusion models (DDMs) have led to staggering performance leaps in image generation, editing and restoration. However, existing DDMs use very large datasets for training. Here, we introduce a framework for training a DDM on a single image. Our method, which we coin SinDDM, learns the internal statistics of the training image by using a multi-scale diffusion process. To drive the reverse diffusion process, we use a fully-convolutional light-weight denoiser, which is conditioned on both the noise level and the scale. This architecture allows generating samples of arbitrary dimensions, in a coarse-to-fine manner. As we illustrate, SinDDM generates diverse high-quality samples, and is applicable in a wide array of tasks, including style transfer and harmonization. Furthermore, it can be easily guided by external supervision. Particularly, we demonstrate text-guided generation from a single image using a pre-trained CLIP model.
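The multi-scale process starts from a scale pyramid of the single training image; a minimal sketch using 2x2 average pooling as an illustrative stand-in for the paper's downsampling:

```python
import numpy as np

def build_pyramid(image, n_scales):
    """Coarse-to-fine scale pyramid for single-image training: each
    level halves the previous one by 2x2 average pooling."""
    pyramid = [image]
    for _ in range(n_scales - 1):
        im = pyramid[-1]
        h, w = (im.shape[0] // 2) * 2, (im.shape[1] // 2) * 2
        im = im[:h, :w]  # crop to even dimensions before pooling
        pyramid.append((im[0::2, 0::2] + im[1::2, 0::2]
                        + im[0::2, 1::2] + im[1::2, 1::2]) / 4.0)
    return pyramid  # pyramid[-1] is the coarsest scale
```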
High energy density physics (HEDP) experiments commonly involve a dynamic wave-front propagating inside a low-density foam. This effect changes the foam's density and, in turn, its transparency. A common problem in foam production is the creation of defective foams. Accurate information on their dimensions and homogeneity is required to classify the foams' quality. Therefore, these parameters are characterized using a 3D measuring laser confocal microscope. For each foam, five images are taken: two 2D images representing the top and bottom foam planes, and three images of side cross-sections from 3D scans. An expert has to do the complex, demanding, and exhausting work of manually classifying the foam's quality from the image set before determining whether the foam can be used in experiments. Currently, quality has two binary levels: normal vs. defective. At the same time, experts are commonly required to classify a sub-class of normal-defective, i.e., foams that are defective but might still be usable for the needed experiment. This sub-class is problematic due to inconclusive judgment that is primarily intuitive. In this work, we present a novel state-of-the-art multi-view deep learning classification model that mimics the physicist's perspective by automatically determining a foam's quality classification, thereby aiding the expert. Our model achieved 86% accuracy on the upper and lower surface foam planes and 82% on the entire set, suggesting interesting heuristics for the problem. A significant added value of this work is the ability to regress the foam quality instead of making a binary decision, and even to explain the decision visually. The source code used in this work, as well as other relevant sources, are available at: https://github.com/scientific-computing-lab-nrcn/multi-view-foams.git
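In its simplest late-fusion form, a multi-view classifier of this kind pools per-view features and applies a shared head; a hypothetical sketch (the mean pooling and linear head are illustrative, not the paper's architecture):

```python
import numpy as np

def fuse_views(view_features, w_fuse):
    """Late-fusion sketch for multi-view classification: pool the
    per-view feature vectors (one row per view) and apply a linear
    classifier head of shape (n_features, n_classes)."""
    pooled = np.mean(view_features, axis=0)  # (n_features,)
    logits = pooled @ w_fuse                 # (n_classes,)
    return int(logits.argmax())
```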
Text-to-image models offer unprecedented freedom to guide creation through natural language. Yet, it is unclear how such freedom can be exercised to generate images of specific unique concepts, modify their appearance, or compose them in new roles and novel scenes. In other words, we ask: how can we use language-guided models to turn our cat into a painting, or imagine a new product based on our favorite toy? Here we present a simple approach that allows such creative freedom. Using only 3-5 images of a user-provided concept, such as an object or a style, we learn to represent it through new "words" in the embedding space of a frozen text-to-image model. These "words" can be composed into natural language sentences, guiding personalized creation in an intuitive way. Notably, we find evidence that a single word embedding is sufficient for capturing unique and varied concepts. We compare our approach to a wide range of baselines and demonstrate that it can more faithfully portray the concepts across a range of applications and tasks. Our code, data, and new words will be available at: https://textual-inversion.github.io
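The optimization at the heart of this approach, stripped to a toy: learn a single new embedding vector against a frozen model, updating nothing else. Here a fixed linear map stands in for the frozen text-to-image encoder (an illustrative assumption; the real objective is the diffusion model's denoising loss):

```python
import numpy as np

def invert_concept(W, target, steps=200, lr=0.05):
    """Toy textual-inversion loop: optimize ONE new word embedding v
    so the frozen encoder W maps it close to a target concept
    feature. Only v is updated; W stays frozen throughout."""
    v = np.zeros(W.shape[1])
    for _ in range(steps):
        residual = W @ v - target
        v -= lr * 2 * W.T @ residual  # gradient of ||W v - target||^2
    return v
```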
Deep learning models used in medical image analysis are prone to raising reliability concerns due to their black-box nature. To shed light on these black-box models, previous works have predominantly focused on identifying the contribution of input features to the diagnosis, i.e., feature attribution. In this work, we explore counterfactual explanations to identify which patterns the model relies on for diagnosis. Specifically, we investigate the effect of changing features within chest X-rays on the classifier's output to understand its decision mechanism. We leverage a style-based approach (StyleEx) to create counterfactual explanations for chest X-rays by manipulating specific latent directions in their latent space. In addition, we propose an approach that significantly reduces the computation time of generating explanations. We clinically evaluate the relevancy of our counterfactual explanations with the help of radiologists. Our code is publicly available.
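Manipulating a latent direction until a classifier's decision flips can be sketched exactly in the linear case (a toy stand-in for StyleGAN-style latents and the real classifier):

```python
import numpy as np

def counterfactual_latent(w, direction, classifier, margin=0.1):
    """Counterfactual-edit sketch: move the latent code w along a
    chosen direction just far enough to flip a linear classifier's
    decision (requires the direction to affect the score)."""
    score = classifier @ w
    slope = classifier @ direction
    alpha = -(score + np.sign(score) * margin) / slope
    return w + alpha * direction
```

Decoding both the original and the edited latent then visualizes which image features carried the decision.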